Problem: there's no compact historical state storage #722
Conversation
Found 14 potential problems in the proposed changes. Check the Files changed tab for more details.
Codecov Report
```
@@            Coverage Diff             @@
##             main     #722      +/-   ##
==========================================
+ Coverage   33.99%   34.59%   +0.59%
==========================================
  Files          28       37       +9
  Lines        1506     2246     +740
==========================================
+ Hits          512      777     +265
- Misses        941     1388     +447
- Partials       53       81      +28
```
```go
if err := historyBatch.WriteSync(); err != nil {
	return err
}
return plainBatch.WriteSync()
```
Will we have dirty data if one WriteSync fails?
It happens in the end blocker, so it's before the chain state commit. If any error happens here, the block is not committed; on restart, the block will be replayed and the versiondb is written again for the same block.
The writes here should be idempotent, so it should be fine, I think.
It'll be in an inconsistent state before the block is replayed, though. Any ideas how to check for the inconsistency? Maybe write a block number into the plain db.
Oh, if it's eventually consistent, maybe only allow queries after the block gets replayed?
We can write a block number into the plain db; that'd be the latest height, and we won't support querying a height higher than that.
https://github.com/crypto-org-chain/cronos/pull/722/files#diff-aa83981ed9bef3f7410fa0d5177ca4934c6e4b3c7280205511237da9b3dc02abR92
You've updated the latest version, so it will only be updated if plainBatch.WriteSync() succeeded. Right?
Yes, a batch write is atomic.
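The scheme discussed in this thread can be sketched with plain maps standing in for the two databases (the real code uses tm-db batches; `latestVersionKey` and `applyBlock` are hypothetical names for illustration). The history batch is written first, and the plain batch carries the latest-height marker, so the marker only advances when the plain state committed:

```go
package main

import "fmt"

// latestVersionKey is a hypothetical marker key: the plain DB records the
// last height whose writes were fully committed, so queries above it can be
// rejected until the block is replayed.
const latestVersionKey = "__latest_version__"

// applyBlock mimics the two WriteSync calls discussed above: the "history
// batch" indexes the old values first, then the "plain batch" writes the new
// values together with the latest-height marker in one atomic unit.
func applyBlock(historyDB, plainDB map[string]string, height int64, changeSet map[string]string) {
	// history batch: remember the value each key had before this block.
	for k := range changeSet {
		historyDB[fmt.Sprintf("%s@%d", k, height)] = plainDB[k]
	}
	// plain batch: new values plus the marker, committed together.
	for k, v := range changeSet {
		plainDB[k] = v
	}
	plainDB[latestVersionKey] = fmt.Sprint(height)
}

func main() {
	history := map[string]string{}
	plain := map[string]string{}
	applyBlock(history, plain, 1, map[string]string{"balance": "100"})
	applyBlock(history, plain, 2, map[string]string{"balance": "90"})
	fmt.Println(plain[latestVersionKey], plain["balance"]) // 2 90
	fmt.Println(history["balance@2"])                      // 100 (value before block 2)
}
```

Since replaying a block recomputes the same keys and values, re-running `applyBlock` for the same height is idempotent, which is why a crash between the two writes is recoverable.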
Found 13 potential problems in the proposed changes. Check the Files changed tab for more details.
Some numbers from local testing with rocksdb:
The versiondb/iavl file size reduction ratio is 0.18.
Just ran some more tests to compare mdbx and rocksdb in terms of db size.
gosec found more than 10 potential problems in the proposed changes. Check the Files changed tab for more details.
```go
if latest == height {
	return s.plainDB.Get(rawKey)
}
```
Maybe return nil or an error here if height > latest? What's the expected behavior in this case?
Sounds good
I added a check, so it'll fail if height is in the future.
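The future-height check agreed on above can be sketched like this, with a map standing in for the plain DB and history index (`getAtHeight` is an illustrative stand-in, not the PR's actual function):

```go
package main

import (
	"errors"
	"fmt"
)

// getAtHeight sketches the query path: heights above the recorded latest
// version are rejected instead of silently returning stale or empty data.
func getAtHeight(store map[int64]string, latest, height int64) (string, error) {
	if height > latest {
		return "", errors.New("height in the future")
	}
	if height == latest {
		return store[latest], nil // fast path: read the plain DB directly
	}
	return store[height], nil // otherwise consult the history index
}

func main() {
	store := map[int64]string{1: "a", 2: "b"}
	v, _ := getAtHeight(store, 2, 2)
	fmt.Println(v) // b
	_, err := getAtHeight(store, 2, 3)
	fmt.Println(err) // height in the future
}
```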
```go
	return s.plainDB.Get(rawKey)
}

found, err := SeekHistoryIndex(s.historyDB, rawKey, uint64(height))
```
Any chance found will be equal to 0?
I think no; it only finds heights larger than the target.
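Under the interpretation in the reply above, the seek semantics can be sketched with a sorted slice standing in for the roaring64 bitmap the PR actually uses (`seekHistoryIndex` is an illustrative stand-in). Since real block heights start at 1, a return value of 0 unambiguously means "no later change":

```go
package main

import (
	"fmt"
	"sort"
)

// seekHistoryIndex finds the first height strictly greater than target in the
// sorted set of heights at which a key changed. 0 means no later change
// exists, i.e. the value at target is still the current one.
func seekHistoryIndex(changeHeights []uint64, target uint64) uint64 {
	i := sort.Search(len(changeHeights), func(i int) bool {
		return changeHeights[i] > target
	})
	if i == len(changeHeights) {
		return 0 // no change after target
	}
	return changeHeights[i]
}

func main() {
	heights := []uint64{5, 9, 12}
	fmt.Println(seekHistoryIndex(heights, 7))  // 9
	fmt.Println(seekHistoryIndex(heights, 12)) // 0
}
```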
```go
for k, v := range s.transientStores {
	stores[k] = v
}
```

Check warning — Code scanning / CodeQL: Iteration over map
```go
for _, key := range keys {
	s.transientStores[key] = transient.NewStore()
}
```

Check warning — Code scanning / CodeQL: Iteration over map
```go
for _, key := range keys {
	s.transientStores[key] = mem.NewStore()
}
```

Check warning — Code scanning / CodeQL: Iteration over map
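The CodeQL "Iteration over map" warnings flag Go's deliberately randomized map iteration order. That is harmless when the loop merely copies entries into another map, but when the order feeds anything observable (serialization, hashing, writing store keys to disk), the usual fix is to iterate over sorted keys. A minimal sketch of that pattern:

```go
package main

import (
	"fmt"
	"sort"
)

// sortedKeys returns a map's keys in deterministic order, avoiding the
// nondeterminism CodeQL flags when ranging over a Go map directly.
func sortedKeys(m map[string]int) []string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	return keys
}

func main() {
	stores := map[string]int{"bank": 1, "acc": 2, "staking": 3}
	for _, k := range sortedKeys(stores) {
		fmt.Println(k, stores[k])
	}
	// acc 2
	// bank 1
	// staking 3
}
```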
Closes: crypto-org-chain#704
Solution:
- Integration version store and streaming service.

Squashed commit messages:
- multiple backends db size benchmark
- support mdbx backend through tm-db
- store latest version
- support GetLatestVersion query
- multistore
- test versiondb streamer
- fix lint
- fix state listener
- temp fix state listening
- optimize common case
- fix lint
- try to fix mdbx build
- update cosmos-sdk
- fix clone append
- add test case
- fix subkey problem in history index
- revert chunking, hard to work with variable length key
- support iterator
- check future height
- fix lint
- new state listener
- fix latest state in query
- fix integration test
- fix prune node test
- update dependency
- add utility to read from file streamer
- Update versiondb/multistore.go (Signed-off-by: yihuang <huang@crypto.com>)
- add unit test
- create common backend test cases
- update dependency
- update with new file streamer format
- Problem: python3.10 is not used in integration tests. Solution: start using python3.10, preparing for later PRs which need the new features; update nixpkgs, necessary for the nix stuff to work.
- python-roaring64
- remove debug log
- add test cases, improve coverage
```go
changesetBatch := s.changesetDB.NewBatch()
defer changesetBatch.Close()

for _, pair := range changeSet {
```
Will we have a big changeSet, which would need pre-allocated capacity for the keys to reduce allocations?
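The reviewer's question is about allocation overhead on large change sets. When the destination size is known up front, pre-sizing the slice avoids repeated growth while copying keys; a minimal sketch (the `kvPair` type and `buildKeys` helper are illustrative, not the PR's actual code):

```go
package main

import "fmt"

type kvPair struct{ Key, Value []byte }

// buildKeys pre-allocates the destination slice to len(changeSet), so a
// large change set does not trigger repeated slice reallocations.
func buildKeys(changeSet []kvPair) [][]byte {
	keys := make([][]byte, 0, len(changeSet))
	for _, pair := range changeSet {
		keys = append(keys, pair.Key)
	}
	return keys
}

func main() {
	cs := []kvPair{{Key: []byte("a")}, {Key: []byte("b")}}
	fmt.Println(len(buildKeys(cs))) // 2
}
```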
```diff
@@ -5,6 +5,7 @@ config {
   'cmd-flags': '--unsafe-experimental',
   'app-config'+: {
     'minimum-gas-prices': '100000000000basetcro',
+    store:: super.store,
```
Might need to remove store for these two as well.
Only need this for the upgrade test, because it's not supported in the previous version.
versiondb/dbutils.go (Outdated)

```go
	"github.com/RoaringBitmap/roaring/roaring64"
)

var ChunkLimit = uint64(1950) // threshold beyond which MDBX overflow pages appear: 4096 / 2 - (keySize + 8)
```
do we also keep this chunk?
removed
Co-authored-by: mmsqe <tqd0800210105@gmail.com> Signed-off-by: yihuang <huang@crypto.com>
There's a better approach using the user-defined timestamp feature of RocksDB v7: #791
Closes: #704
Solution:
- Integration version store and streaming service.
PR Checklist:
- make
- make test
- go fmt
- golangci-lint run
- go list -json -m all | nancy sleuth

Thank you for your code, it's appreciated! :)